    Kansrijk beslissen

    Suppose that you, or one of your loved ones, suffers a heart attack. In that case, part of the heart dies because too little blood flows to it. The cause is often a clot in one of the coronary arteries. You are rushed to hospital; let us assume this is the Erasmus MC. The emergency department is located, quite fittingly, at the heart of this complex. You naturally want to be treated as well as possible. What does 'as well as possible' mean? You want to undergo those tests and treatments that ultimately lead to the best prognosis. Prognosis can be expressed as the probability of surviving the heart attack in relatively good health. Put the other way around, we want to make the risk of dying from the infarction as small as possible.

    Validation in prediction research: the waste by data splitting

    Accurate prediction of medical outcomes is important for diagnosis and prognosis. Major medical journals nowadays require that validity outside the development sample be shown, which is often achieved by randomly splitting the available data into a development and a validation part. Is such data splitting a waste of resources? In large samples, interest should shift to the assessment of heterogeneity in model performance across settings. In small samples, cross-validation and bootstrapping are more efficient approaches. In conclusion, random data splitting should be abolished for the validation of prediction models.
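
    To make the contrast concrete, the sketch below compares a single random split with an optimism-corrected bootstrap for internal validation of a logistic regression model. The data, sample size, and number of bootstrap replicates are illustrative assumptions, not the analysis from the paper.

```python
# Sketch (illustrative data and settings): single random split vs.
# optimism-corrected bootstrap validation of a logistic regression model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 300, 5                                   # deliberately small sample
X = rng.normal(size=(n, p))
lin_pred = X @ np.array([0.8, 0.5, 0.3, 0.0, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

# (a) random 50/50 split: half the data never informs the final model
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=1)
split_model = LogisticRegression().fit(X_dev, y_dev)
split_auc = roc_auc_score(y_val, split_model.predict_proba(X_val)[:, 1])

# (b) optimism-corrected bootstrap: the full sample is used for model fitting
full_model = LogisticRegression().fit(X, y)
apparent_auc = roc_auc_score(y, full_model.predict_proba(X)[:, 1])
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)                 # bootstrap resample with replacement
    m = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)
corrected_auc = apparent_auc - np.mean(optimism)

print(f"split-sample c-statistic:        {split_auc:.3f}")
print(f"optimism-corrected c-statistic:  {corrected_auc:.3f}")
```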

    Towards better clinical prediction models

    Clinical prediction models provide risk estimates for the presence of disease (diagnosis) or an event in the future course of disease (prognosis) for individual patients. Although publications that present and evaluate such models are becoming more frequent, the methodology is often suboptimal. We propose that seven steps should be considered in developing prediction models: (i) consideration of the research question and initial data inspection; (ii) coding of predictors; (iii) model specification; (iv) model estimation; (v) evaluation of model performance; (vi) internal validation; and (vii) model presentation. The validity of a prediction model is ideally assessed in fully independent data. We propose four key measures to evaluate model performance: calibration-in-the-large, or the model intercept (A); calibration slope (B); discrimination, with a concordance statistic (C); and clinical usefulness, with decision-curve analysis (D). As an application, we develop and validate prediction models for 30-day mortality in patients with an acute myocardial infarction. This illustrates the usefulness of the proposed framework to strengthen the methodological rigour and quality of prediction models in cardiovascular research.
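
    The four measures can be computed directly from a validation sample. The sketch below is one possible implementation, assuming predicted probabilities p_val (strictly between 0 and 1) and observed binary outcomes y_val from an already developed model; the function name, the decision threshold, and the use of statsmodels and scikit-learn are choices made here for illustration.

```python
# Sketch (not the authors' code): ABCD validation measures from predicted
# probabilities p_val and observed 0/1 outcomes y_val; assumes 0 < p_val < 1.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def abcd_measures(y_val, p_val, threshold=0.10):
    y_val, p_val = np.asarray(y_val), np.asarray(p_val)
    lp = np.log(p_val / (1 - p_val))                      # linear predictor (logit scale)

    # A: calibration-in-the-large -- intercept with the slope fixed at 1 (offset)
    fit_a = sm.GLM(y_val, np.ones((len(y_val), 1)),
                   family=sm.families.Binomial(), offset=lp).fit()
    # B: calibration slope -- coefficient of the linear predictor
    fit_b = sm.GLM(y_val, sm.add_constant(lp),
                   family=sm.families.Binomial()).fit()
    # C: discrimination -- concordance (c) statistic, equal to the ROC area
    c = roc_auc_score(y_val, p_val)
    # D: clinical usefulness -- net benefit at a single (assumed) decision threshold
    treat = p_val >= threshold
    tp = np.sum(treat & (y_val == 1))
    fp = np.sum(treat & (y_val == 0))
    d = (tp - fp * threshold / (1 - threshold)) / len(y_val)

    return {"A (calibration-in-the-large)": float(np.asarray(fit_a.params)[0]),
            "B (calibration slope)": float(np.asarray(fit_b.params)[1]),
            "C (c-statistic)": float(c),
            "D (net benefit)": float(d)}
```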

    Prognostic Modeling for Clinical Decision Making: theory and applications

    Clinical decision making is concerned with choices made for individual patients, with a focus on diagnosis, therapy and prognosis. These choices can be supported by empirical research. Diagnostic research includes the assessment of test characteristics, such as sensitivity and specificity, while therapeutic research is preferably performed in randomized clinical trials. Both diagnostic and therapeutic decisions aim to improve the prognosis for the patient. Prognosis is therefore at the heart of clinical decision making.
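
    As a minimal illustration of the test characteristics mentioned above, the snippet below computes sensitivity and specificity from a purely hypothetical 2x2 table.

```python
# Hypothetical 2x2 table of a diagnostic test against a reference standard.
tp, fn = 80, 20     # diseased patients: test positive / test negative
fp, tn = 30, 170    # non-diseased patients: test positive / test negative

sensitivity = tp / (tp + fn)   # probability of a positive test given disease
specificity = tn / (tn + fp)   # probability of a negative test given no disease
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")  # 0.80, 0.85
```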

    Monitoring prognosis in severe traumatic brain injury

    The choice of disease-specific versus generic scales is common to many fields of medicine. In the area of traumatic brain injury, evidence is emerging that disease-specific prognostic models and disease-specific scoring systems are preferable in the intensive care setting. In monitoring prognosis, the use of a calibration belt in validation studies potentially provides accurate and intuitively attractive insight into performance. This approach deserves further empirical evaluation of its added value as well as its limitations.

    The number of subjects per variable required in linear regression analyses

    Objectives: To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting: We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R2 of the fitted model. Results: A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R2, although adjusted R2 estimates behaved well. The bias in estimating the model R2 statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Conclusion: Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals.
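
    A minimal Monte Carlo sketch in the spirit of this study is shown below: it estimates the relative bias of ordinary least squares coefficients with only two subjects per variable. The number of predictors, effect sizes, and replicate count are assumptions chosen for illustration, not the settings of the original simulations.

```python
# Sketch: relative bias of OLS coefficients with two subjects per variable
# (illustrative settings; not the original simulation design).
import numpy as np

rng = np.random.default_rng(42)
p, spv = 10, 2
n = p * spv                                    # 20 subjects, 10 predictors
true_beta = np.full(p, 0.5)

estimates = []
for _ in range(2000):
    X = rng.normal(size=(n, p))
    y = X @ true_beta + rng.normal(size=n)
    X_design = np.column_stack([np.ones(n), X])            # add intercept column
    beta_hat = np.linalg.lstsq(X_design, y, rcond=None)[0][1:]
    estimates.append(beta_hat)

rel_bias = (np.mean(estimates, axis=0) - true_beta) / true_beta
print("relative bias per coefficient:", np.round(rel_bias, 3))
```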

    Graphical assessment of internal and external calibration of logistic regression models by using loess smoothers

    Predicting the probability of the occurrence of a binary outcome or condition is important in biomedical research. While assessing discrimination is an essential issue in developing and validating binary prediction models, less attention has been paid to methods for assessing model calibration. Calibration refers to the degree of agreement between observed and predicted probabilities and is often assessed by testing for lack-of-fit. The objective of our study was to examine the ability of graphical methods to assess the calibration of logistic regression models. We examined lack of internal calibration, which was related to misspecification of the logistic regression model, and lack of external calibration, which was related to an overfit model or to shrinkage of the linear predictor. We conducted an extensive set of Monte Carlo simulations with a locally weighted least squares regression smoother (i.e., the loess algorithm) to examine the ability of graphical methods to assess model calibration. We found that loess-based methods were able to provide evidence of moderate departures from linearity and indicate omission of a moderately strong interaction. Misspecification of the link function was harder to detect. Visual patterns were clearer with higher sample sizes, higher incidence of the outcome, or higher discrimination. Loess-based methods were also able to identify the lack of calibration in external validation samples when an overfit regression model had been used. In conclusion, loess-based smoothing methods are adequate tools to graphically assess calibration and merit wider application.
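
    The sketch below illustrates the general idea of a smoothed calibration plot for a misspecified logistic regression model (an omitted interaction), using the lowess smoother from statsmodels as a stand-in for loess; the simulated data and smoothing fraction are assumptions, not the study's simulation design.

```python
# Sketch: smoothed calibration curve for a misspecified logistic model
# (omitted interaction), with statsmodels' lowess standing in for loess.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=(n, 2))
true_lp = 0.8 * x[:, 0] + 0.5 * x[:, 0] * x[:, 1]          # interaction term
y = rng.binomial(1, 1 / (1 + np.exp(-true_lp)))

# fit a main-effects-only model, so the interaction is omitted
p_hat = LogisticRegression().fit(x, y).predict_proba(x)[:, 1]

# smooth observed outcomes against predicted probabilities
smoothed = lowess(y, p_hat, frac=0.5, return_sorted=True)   # columns: sorted p_hat, smoothed y

plt.plot(smoothed[:, 0], smoothed[:, 1], label="lowess-smoothed observed")
plt.plot([0, 1], [0, 1], linestyle="--", label="ideal calibration")
plt.xlabel("predicted probability")
plt.ylabel("observed proportion")
plt.legend()
plt.show()
```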

    A decision-analytic approach to define poor prognosis patients: A case study for non-seminomatous germ cell cancer patients

    Background. Classification systems may be useful to direct more aggressive treatment to cancer patients with a relatively poor prognosis. The definition of 'poor prognosis' often lacks a formal basis. We propose a decision-analytic approach to weigh benefits and harms explicitly to define the treatment threshold for more aggressive treatment. This approach is illustrated by a case study in advanced testicular cancer, where patients with a high risk of mortality under standard treatment may be eligible for high-dose chemotherapy with stem cell support, a group currently defined by the IGCC classification. Methods. We use
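
    The general logic of such a treatment threshold can be written down in a few lines. The sketch below uses the standard expected-benefit-versus-harm threshold (the risk above which more aggressive treatment is favoured); the numbers and the simple utility structure are hypothetical and are not taken from the case study.

```python
# Hypothetical benefit/harm figures to illustrate a decision-analytic
# treatment threshold; not taken from the testicular cancer case study.
benefit = 0.35   # assumed chance that aggressive treatment rescues a patient
                 # who would otherwise die under standard treatment
harm = 0.05      # assumed excess treatment-related mortality among patients
                 # who would have survived standard treatment anyway

# expected gain of aggressive treatment: risk * benefit - (1 - risk) * harm,
# which is positive when risk exceeds harm / (harm + benefit)
threshold = harm / (harm + benefit)
print(f"treat more aggressively when predicted mortality risk exceeds {threshold:.2f}")

predicted_risk = 0.60                          # hypothetical risk under standard treatment
print("aggressive treatment favoured" if predicted_risk > threshold
      else "standard treatment favoured")
```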

    Predictive accuracy of novel risk factors and markers: A simulation study of the sensitivity of different performance measures for the Cox proportional hazards regression model

    Predicting outcomes that occur over time is important in clinical, population health, and health services research. We compared changes in different measures of performance when a novel risk factor or marker was added to an existing Cox proportional hazards regression model. We performed Monte Carlo simulations for common measures of performance: concordance indices (c, including various extensions to survival outcomes), Royston's D index, R2-type measures, and Chambless' adaptation of the integrated discrimination improvement to survival outcomes. We found that the increase in performance due to the inclusion of a risk factor tended to decrease as the performance of the reference model increased. Moreover, the increase in performance increased as the hazard ratio or the prevalence of a binary risk factor increased. Finally, for the concordance indices and R2-type measures, the absolute increase in predictive accuracy due to the inclusion of a risk factor was greater when the observed event rate was higher (low censoring). Amongst the different concordance indices, Chambless and Diao's c-statistic exhibited the greatest increase in predictive accuracy when a novel risk factor was added to an existing model. Amongst the different R2-type measures, O'Quigley et al.'s modification of Nagelkerke's R2 index and Kent and O'Quigley's ρ²_{w,a} displayed the greatest sensitivity to the addition of a novel risk factor or marker. These methods were then applied to a cohort of 8635 patients hospitalized with heart failure to examine the added benefit of a point-based scoring system for predicting mortality after initial adjustment with patient age alone.
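
    The sketch below shows the simplest version of such a comparison: the change in Harrell's concordance statistic when a marker is added to a Cox model, on simulated data with the lifelines package. The covariates, effect sizes, and censoring mechanism are illustrative assumptions; the paper itself compares a much wider range of measures.

```python
# Sketch (simulated data, illustrative effect sizes): change in Harrell's c
# when a novel marker is added to a Cox model, using the lifelines package.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 1000
age = rng.normal(70, 10, n)
marker = rng.normal(0, 1, n)
hazard = np.exp(0.03 * (age - 70) + 0.5 * marker)
event_time = rng.exponential(1 / hazard)
censor_time = rng.exponential(2.0, n)                      # independent censoring
df = pd.DataFrame({
    "time": np.minimum(event_time, censor_time),
    "event": (event_time <= censor_time).astype(int),
    "age": age,
    "marker": marker,
})

base = CoxPHFitter().fit(df[["time", "event", "age"]], duration_col="time", event_col="event")
full = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(f"c-statistic, age only:     {base.concordance_index_:.3f}")
print(f"c-statistic, age + marker: {full.concordance_index_:.3f}")
```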